This paper studies a multi-task high-dimensional linear regression model in which the noise across different tasks is correlated, in the moderately high-dimensional regime where the sample size $n$ and the dimension $p$ are of the same order. Our goal is to estimate the covariance matrix of the noise random vectors, or equivalently the correlation of the noise variables on any pair of tasks. Treating the regression coefficients as nuisance parameters, we leverage the multi-task elastic-net and multi-task lasso estimators to estimate the nuisance. By precisely understanding the bias of the squared residual matrix and correcting this bias, we develop a novel estimator of the noise covariance that converges in Frobenius norm at the rate $n^{-1/2}$. This novel estimator is computationally efficient. Under suitable conditions, the proposed noise covariance estimator attains the same rate of convergence as an "oracle" estimator that knows the regression coefficients of the multi-task model in advance. The Frobenius error bounds obtained in this paper also illustrate the advantage of this new estimator over a method-of-moments estimator that does not attempt to estimate the nuisance. As a byproduct of our techniques, we obtain estimates of the generalization error of the multi-task elastic-net and multi-task lasso estimators. Extensive simulation studies are carried out to illustrate the numerical performance of the proposed method.
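A minimal sketch of the setup, assuming Gaussian designs and scikit-learn's MultiTaskLasso as the nuisance estimator: it simulates correlated noise across tasks, then forms the naive residual-based covariance (the quantity whose bias the paper's estimator corrects) and the oracle covariance computed from the true coefficients. The bias correction itself is not reproduced here, and all dimensions and the regularization level are illustrative assumptions.

```python
# Hedged sketch: naive residual-based noise-covariance estimate vs. the oracle.
# The paper's bias-corrected estimator is NOT reproduced here; this only sets up
# the multi-task regression and the quantities the abstract refers to.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n, p, T = 300, 400, 3                        # n and p of the same order, T tasks
B = np.zeros((p, T))
B[:10] = rng.normal(size=(10, T))            # sparse regression coefficients (nuisance)
S = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])              # true noise covariance across tasks
X = rng.normal(size=(n, p))
E = rng.multivariate_normal(np.zeros(T), S, size=n)   # correlated noise
Y = X @ B + E

fit = MultiTaskLasso(alpha=0.1, fit_intercept=False).fit(X, Y)
F = Y - X @ fit.coef_.T                      # residual matrix (n x T)
S_naive = F.T @ F / n                        # biased by the estimation error of B
S_oracle = (Y - X @ B).T @ (Y - X @ B) / n   # "oracle" estimate using the true coefficients
print(np.round(S_naive, 2), np.round(S_oracle, 2), sep="\n")
```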
We consider observations from a single-index model with an unknown link function, Gaussian covariates, and a regularized M-estimator $\hat\beta$ constructed from a convex loss function and a convex penalty. In the regime where the sample size $n$ and the dimension $p$ grow such that $p/n$ has a finite limit, the behavior of the empirical distribution of $\hat\beta$ and of the predicted values $X\hat\beta$ has previously been characterized in a number of models: the empirical distributions are known to converge to proximal operators of the loss and penalty in a related Gaussian sequence model, which captures the interplay between the ratio $p/n$, the loss, the regularization, and the data-generating process. This connection between $(\hat\beta, X\hat\beta)$ and the corresponding proximal operators requires solving fixed-point equations that typically involve unobservable quantities, such as the prior distribution on the index or the link function. This paper develops a different theory to describe the empirical distributions of $\hat\beta$ and $X\hat\beta$: approximations of $(\hat\beta, X\hat\beta)$ are provided that only involve observable adjustments. These proposed observable adjustments are data-driven, e.g., they do not require prior knowledge of the index or the link function. These new adjustments yield confidence intervals for individual components of the index, as well as estimators of the correlation of $\hat\beta$ with the index. The interplay between the loss, the regularization, and the model is thus captured in a data-driven manner, without solving the fixed-point equations studied in previous works. The results apply to both strongly convex regularizers and unregularized M-estimation. Simulations are provided for the square and logistic losses in single-index models, including logistic regression and 1-bit compressed sensing with 20\% corrupted bits.
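A minimal simulation sketch of the objects the theory describes, not of the observable adjustments themselves: data from a single-index model with a logistic link and Gaussian covariates in the proportional regime, fit with a ridge-regularized logistic M-estimator from scikit-learn. The dimensions, regularization level, and link are illustrative assumptions.

```python
# Hedged sketch: a single-index model and a regularized M-estimator, the objects
# the observable-adjustment theory describes. The adjustments and confidence
# intervals themselves are not implemented here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 1000, 500                              # proportional regime: p/n has a finite limit
beta = rng.normal(size=p) / np.sqrt(p)        # the (unknown) index
X = rng.normal(size=(n, p))                   # Gaussian covariates
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))   # unknown link; here a logistic link

# Ridge-regularized logistic M-estimator (strongly convex regularizer)
beta_hat = LogisticRegression(penalty="l2", C=1.0, fit_intercept=False,
                              max_iter=2000).fit(X, y).coef_.ravel()
corr = beta_hat @ beta / (np.linalg.norm(beta_hat) * np.linalg.norm(beta))
print("empirical correlation of beta_hat with the index:", round(corr, 3))
```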
As humans and animals learn in the natural world, they encounter distributions of entities, situations, and events that are far from uniform. Typically, a relatively small set of experiences is encountered frequently, while many important experiences occur only rarely. The highly skewed, heavy-tailed nature of reality poses particular learning challenges that humans and animals have met by evolving specialized memory systems. By contrast, most popular RL environments and benchmarks involve approximately uniform variation of properties, objects, situations, or tasks. How will RL algorithms perform in worlds (like ours) where the distribution of environment features is far less uniform? To explore this question, we develop three complementary RL environments in which the agent's experience varies according to a Zipfian (discrete power-law) distribution. On these benchmarks, we find that standard deep RL architectures and algorithms acquire useful knowledge of common situations and tasks but fail to adequately learn about rarer ones. To better understand this failure, we explore how different aspects of current approaches can be adjusted to help improve performance on rare events, and show that the RL objective function, the agent's memory system, and self-supervised learning objectives can all influence an agent's ability to learn from rare experiences. Together, these results show that learning robustly from skewed experience is a critical challenge for applying deep RL methods beyond simulations or laboratories, and our Zipfian environments provide a basis for measuring future progress towards this goal.
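A small sketch of the key ingredient, with an assumed item count and exponent: sampling which item or task an episode involves from a Zipfian (discrete power-law) distribution, so that a few items dominate the agent's experience while most appear rarely. The environments themselves are not reproduced.

```python
# Hedged sketch: Zipfian (discrete power-law) sampling of which object/task an
# agent encounters per episode; item count and exponent are assumed values.
import numpy as np

num_items, exponent = 20, 2.0
ranks = np.arange(1, num_items + 1)
probs = ranks ** (-exponent)
probs /= probs.sum()                          # P(item k) proportional to 1 / k^exponent

rng = np.random.default_rng(0)
episodes = rng.choice(num_items, size=10_000, p=probs)
counts = np.bincount(episodes, minlength=num_items)
print("encounters per item (common -> rare):", counts)   # a few items dominate; most are rare
```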
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/.
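A hedged PyTorch sketch of a multi-task readout in the spirit of the benchmark: one shared encoder, one image-level head for brain-region classification, and one pixel-level head for microstructure segmentation. Layer sizes and class counts are illustrative assumptions rather than the MTNeuro baselines.

```python
# Hedged sketch: shared encoder with an image-level (region) head and a
# pixel-level (segmentation) head; sizes and class counts are assumptions.
import torch
import torch.nn as nn

class MultiTaskReadout(nn.Module):
    def __init__(self, num_regions=4, num_microstructures=4):
        super().__init__()
        self.encoder = nn.Sequential(                       # shared representation
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.region_head = nn.Sequential(                   # image-level label
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_regions))
        self.seg_head = nn.Conv2d(32, num_microstructures, 1)   # pixel-level labels

    def forward(self, x):
        h = self.encoder(x)
        return self.region_head(h), self.seg_head(h)

x = torch.randn(2, 1, 64, 64)                               # fake grayscale patches
region_logits, seg_logits = MultiTaskReadout()(x)
print(region_logits.shape, seg_logits.shape)                # (2, 4) and (2, 4, 64, 64)
```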
The purpose of this work was to tackle practical issues which arise when using a tendon-driven robotic manipulator with a long, passive, flexible proximal section in medical applications. A separable robot which overcomes difficulties in actuation and sterilization is introduced, in which the body containing the electronics is reusable and the remainder is disposable. A control input which resolves the redundancy in the kinematics and a physical interpretation of this redundancy are provided. The effect of a static change in the proximal section angle on bending angle error was explored under four testing conditions for a sinusoidal input. Bending angle error increased with increasing proximal section angle under all testing conditions; relative to the baseline case, compensation reduced the error by an average of 41.48% for re-tension, 4.28% for hysteresis, and 52.35% for re-tension + hysteresis compensation. Two major sources of error in tracking the bending angle were identified: time delay from hysteresis and DC offset from the proximal section angle. Examination of these error sources revealed that the simple hysteresis compensation was most effective for removing time delay and re-tension compensation for removing DC offset, which was the primary source of increasing error. The re-tension compensation was also tested for dynamic changes in the proximal section and reduced error in the final configuration of the tip by 89.14% relative to the baseline case.
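A toy numerical illustration, not the paper's controller: a sinusoidal bending command corrupted by an assumed DC offset and time delay (the two identified error sources), with offset subtraction and a phase advance standing in for re-tension and hysteresis compensation.

```python
# Hedged toy example of the two identified error sources; offset and delay values
# are made up, and the compensation here is a generic stand-in, not the paper's.
import numpy as np

t = np.linspace(0, 10, 2001)
command = 30 * np.sin(2 * np.pi * 0.2 * t)           # desired bending angle (deg)
offset, delay = 5.0, 0.3                              # assumed DC offset (deg) and delay (s)
measured = 30 * np.sin(2 * np.pi * 0.2 * (t - delay)) + offset

compensated = np.interp(t + delay, t, measured) - offset   # advance in time, remove offset
rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
print("RMS tracking error raw:", round(rms(measured - command), 2),
      "compensated:", round(rms(compensated - command), 2))
```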
Compliance in actuation has been exploited to generate highly dynamic maneuvers such as throwing that take advantage of the potential energy stored in joint springs. However, the energy storage and release could not be well-timed yet. On the contrary, for multi-link systems, the natural system dynamics might even work against the actual goal. With the introduction of variable stiffness actuators, this problem has been partially addressed. With a suitable optimal control strategy, the approximate decoupling of the motor from the link can be achieved to maximize the energy transfer into the distal link prior to launch. However, such continuous stiffness variation is complex and typically leads to oscillatory swing-up motions instead of clear launch sequences. To circumvent this issue, we investigate decoupling for speed maximization with a dedicated novel actuator concept denoted Bi-Stiffness Actuation. With this, it is possible to fully decouple the link from the joint mechanism by a switch-and-hold clutch and simultaneously keep the elastic energy stored. We show that with this novel paradigm, it is not only possible to reach the same optimal performance as with power-equivalent variable stiffness actuation, but even directly control the energy transfer timing. This is a major step forward compared to previous optimal control approaches, which rely on optimizing the full time-series control input.
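A toy energy-balance calculation under strong assumptions (all stored elastic energy transfers into a single rigid link; the stiffness, inertia, deflection, and pre-release velocity are made-up values), meant only to illustrate why holding the stored energy until a chosen release instant matters for the launch speed. It is not the actuator or multi-link model from the paper.

```python
# Hedged toy calculation: if a clutch holds the stored elastic energy until a
# chosen release instant, the launch speed follows from adding the stored energy
# to the link's kinetic energy at that instant (full-transfer assumption).
import numpy as np

k, m = 200.0, 0.5            # assumed spring stiffness (N*m/rad) and link inertia (kg*m^2)
x0 = 0.4                     # assumed stored spring deflection at release (rad)
v_link = 2.0                 # assumed link velocity just before release (rad/s)

dv_sq = (k / m) * x0 ** 2    # velocity-squared gained if all stored energy transfers
v_final = np.sqrt(v_link ** 2 + dv_sq)
print("link speed after full energy transfer:", round(v_final, 2), "rad/s")
```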
The previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors like YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of a combination of an existing detector and the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
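A hedged sketch of a two-stage pipeline in the spirit of the reported baseline: an off-the-shelf torchvision Faster R-CNN proposes vehicle boxes and a separate classifier labels each crop. A plain ResNet-18 stands in for the HRN classifier, and the image is random, so the detections are placeholders.

```python
# Hedged sketch: detect-then-classify pipeline; ResNet-18 is a stand-in for the
# hierarchical (HRN) classifier, and the input image is a random placeholder.
import torch
import torchvision
from torchvision.transforms.functional import resize

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = torchvision.models.resnet18(weights=None).eval()   # stand-in for HRN

image = torch.rand(3, 480, 640)                    # fake road-scene image in [0, 1]
with torch.no_grad():
    boxes = detector([image])[0]["boxes"]          # (N, 4) proposed vehicle boxes
    for x1, y1, x2, y2 in boxes[:5].round().int().tolist():
        crop = image[:, y1:y2, x1:x2]
        if crop.shape[1] < 2 or crop.shape[2] < 2:  # skip degenerate crops
            continue
        crop = resize(crop, [224, 224]).unsqueeze(0)
        fine_label = classifier(crop).argmax(1).item()   # fine-grained class index
        print((x1, y1, x2, y2), fine_label)
```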
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches to addressing the COD has led us to solving high-dimensional PDEs. This has opened doors to solving a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) demonstrate that they can provide significant parameter savings while attaining the same accuracy as classical Dense Neural Networks (DNN). In addition, we also show how TNN can be trained faster than DNN for the same accuracy. Besides TNN, we also introduce the Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count as compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
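A hedged sketch of the general idea behind tensorized layers, not the paper's TNN construction: a dense weight matrix replaced by a low-rank factorization, keeping the same input and output sizes while cutting the parameter count. The layer sizes and rank are illustrative.

```python
# Hedged sketch: parameter savings from factorizing a dense weight matrix,
# illustrating the general idea (not the paper's specific TNN architecture).
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.a = nn.Linear(d_in, rank, bias=False)   # W is approximated by B @ A
        self.b = nn.Linear(rank, d_out, bias=True)

    def forward(self, x):
        return self.b(self.a(x))

d_in, d_out, rank = 512, 512, 16
dense = nn.Linear(d_in, d_out)
lowrank = FactorizedLinear(d_in, d_out, rank)
count = lambda m: sum(p.numel() for p in m.parameters())
print("dense params:", count(dense), "factorized params:", count(lowrank))
```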
The task of reconstructing 3D human motion has wide-ranging applications. The gold-standard motion capture (MoCap) systems are accurate but inaccessible to the general public due to their cost, hardware, and space constraints. In contrast, monocular human mesh recovery (HMR) methods are much more accessible than MoCap as they take single-view videos as inputs. Replacing multi-view MoCap systems with a monocular HMR method would break the current barriers to collecting accurate 3D motion, making exciting applications like motion analysis and motion-driven animation accessible to the general public. However, the performance of existing HMR methods degrades when the video contains challenging and dynamic motion that is not in the existing MoCap datasets used for training. This reduces their appeal, as dynamic motion is frequently the target of 3D motion recovery in the aforementioned applications. Our study aims to bridge the gap between monocular HMR and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. We introduce the Neural Motion (NeMo) field. It is optimized to represent the underlying 3D motions across a set of videos of the same action. Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection. To further validate NeMo using 3D metrics, we collected a small MoCap dataset mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
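A hedged sketch of the shared-motion-field idea rather than the NeMo implementation: a small MLP maps a time phase and a per-video latent code to 3D joints and is fit so that an assumed orthographic projection matches each video's 2D keypoints. The keypoints here are random placeholders, and all sizes are assumptions.

```python
# Hedged sketch: a shared motion field fit to 2D keypoints across several videos
# of the same action; orthographic projection and all dimensions are assumptions.
import torch
import torch.nn as nn

num_videos, num_joints, latent_dim = 4, 17, 8
field = nn.Sequential(nn.Linear(1 + latent_dim, 128), nn.ReLU(),
                      nn.Linear(128, num_joints * 3))
codes = nn.Parameter(torch.randn(num_videos, latent_dim))   # one latent code per video

# Fake supervision: 2D keypoints for each video at a sampled phase
t = torch.rand(num_videos, 1)
kp2d = torch.randn(num_videos, num_joints, 2)

opt = torch.optim.Adam(list(field.parameters()) + [codes], lr=1e-3)
for _ in range(100):
    joints3d = field(torch.cat([t, codes], dim=1)).view(num_videos, num_joints, 3)
    loss = ((joints3d[..., :2] - kp2d) ** 2).mean()   # orthographic reprojection error
    opt.zero_grad(); loss.backward(); opt.step()
print("final reprojection loss:", round(loss.item(), 4))
```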
Rigorous guarantees about the performance of predictive algorithms are necessary in order to ensure their responsible use. Previous work has largely focused on bounding the expected loss of a predictor, but this is not sufficient in many risk-sensitive applications where the distribution of errors is important. In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor. Our method takes advantage of the order statistics of the observed loss values rather than relying on the sample mean alone. We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics, each targeting important subsets of the data distribution. We analyze the theoretical properties of our proposed method and demonstrate its ability to rigorously control loss quantiles on several real-world datasets.
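A worked example in the spirit of the framework, using a standard distribution-free order-statistic bound (not necessarily the paper's exact construction): an upper confidence bound on the p-quantile of the loss computed from held-out loss values, with the binomial calculation done via scipy.

```python
# Hedged worked example: a classical order-statistic upper confidence bound on a
# loss quantile for i.i.d. held-out losses; the held-out losses are synthetic.
import numpy as np
from scipy.stats import binom

def quantile_upper_bound(losses, p=0.9, delta=0.05):
    """Smallest order statistic L_(k) with P(L_(k) >= q_p) >= 1 - delta."""
    losses = np.sort(np.asarray(losses))
    n = len(losses)
    # For continuous i.i.d. losses, P(L_(k) >= q_p) = P(Binomial(n, p) <= k - 1);
    # pick the smallest k whose coverage reaches 1 - delta.
    ks = np.arange(1, n + 1)
    valid = binom.cdf(ks - 1, n, p) >= 1 - delta
    if not valid.any():
        return np.inf            # not enough samples for this (p, delta) pair
    return losses[ks[valid][0] - 1]

rng = np.random.default_rng(0)
held_out_losses = rng.exponential(size=500)            # fake per-example losses
print("90th-percentile loss bounded by:", round(quantile_upper_bound(held_out_losses), 3))
```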